# High Rouge Score
## Rut5 Base Headline Generation

**Author:** wanderer-msk · **Downloads:** 65 · **Likes:** 1 · **Tags:** Text Generation, Transformers, Other

A Russian news headline generation model based on the T5 architecture, optimized for short news texts. It generates summary-style headlines of 6-11 words.

## Flan T5 Base Samsum

**License:** Apache-2.0 · **Author:** sharmax-vikas · **Downloads:** 26 · **Likes:** 0 · **Tags:** Text Generation, Transformers

A fine-tuned version of google/flan-t5-base trained on the SAMSum dataset for dialogue summarization.

## Mt5 Vi News Summarization

**License:** Apache-2.0 · **Author:** ntkhoi · **Downloads:** 354 · **Likes:** 0 · **Tags:** Text Generation, Transformers, Other

A text summarization model based on google/mt5-small, fine-tuned on Vietnamese news datasets for automatic summarization of Vietnamese news.

## Barthez Orange Ft

**License:** Apache-2.0 · **Author:** amasi · **Downloads:** 26 · **Likes:** 1 · **Tags:** Text Generation, Transformers

A fine-tuned text summarization model based on barthez-orangesum-abstract, excelling in French text summarization tasks.

## Saved Model Git Base

**License:** MIT · **Author:** holipori · **Downloads:** 13 · **Likes:** 0 · **Tags:** Image-to-Text, Transformers, Other

A vision-language model based on microsoft/git-base, fine-tuned on an image-folder dataset and used primarily for image caption generation.

## Article Summarizer T5 Large

**Author:** aszfcxcgszdx · **Downloads:** 21 · **Likes:** 1 · **Tags:** Text Generation, Transformers, English

An automatic summarization model based on FLAN-T5-large, suited to English text summarization tasks.

## Bart Base Facebook

**License:** Apache-2.0 · **Author:** sunsvrv · **Downloads:** 18 · **Likes:** 0 · **Tags:** Text Generation, Transformers

A fine-tuned version of facebook/bart-base trained on the XSum dataset, used primarily for summarization tasks.

## Fb Bart Large Finetuned Trade The Event Finance Summarizer

**Author:** nickmuchi · **Downloads:** 24 · **Likes:** 14 · **Tags:** Text Generation, Transformers

A financial event summarization model fine-tuned from BART-large, with strong performance on ROUGE metrics.

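Several entries on this page are ranked by their ROUGE scores. As a quick reference, ROUGE-N recall counts the n-grams a generated summary shares with a reference summary, divided by the total n-grams in the reference. The minimal pure-Python sketch below illustrates the idea; real evaluations should use an established implementation such as the `rouge-score` package, which also applies stemming and computes precision/F1.

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=2):
    """ROUGE-N recall: overlapping n-grams / total reference n-grams."""
    cand = ngram_counts(candidate.lower().split(), n)
    ref = ngram_counts(reference.lower().split(), n)
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0

print(rouge_n_recall("the cat sat on the mat",
                     "the cat sat on a mat"))  # → 0.6 (3 of 5 reference bigrams)
```

With `n=2` this is the ROUGE-2 figure that appears in some model names here (e.g. "R2 19.4" denotes ROUGE-2 of 19.4).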
## Pegasus Multi News Headline

**Author:** chinhon · **Downloads:** 18 · **Likes:** 3 · **Tags:** Text Generation, Transformers

A news headline generation model fine-tuned from google/pegasus-multi_news, excelling at generating concise headlines from multi-document input.

## Distilbart Xsum 12 6

**License:** Apache-2.0 · **Author:** sshleifer · **Downloads:** 1,446 · **Likes:** 6 · **Tags:** Text Generation, English

DistilBART is a distilled version of the BART model, focusing on text summarization tasks; it significantly reduces model size and inference time while maintaining high performance.

## Distilbart Cnn 12 6

**License:** Apache-2.0 · **Author:** sshleifer · **Downloads:** 783.96k · **Likes:** 278 · **Tags:** Text Generation, English

DistilBART is a distilled version of the BART model, specifically optimized for text summarization tasks; it significantly improves inference speed while maintaining high performance.

## Bart Base Cnn R2 19.4 D35 Hybrid

**License:** Apache-2.0 · **Author:** echarlaix · **Downloads:** 20 · **Likes:** 0 · **Tags:** Text Generation, Transformers, English

A pruned and optimized BART-base model designed for summarization tasks, retaining 53% of the original model's weights.

## Bart German

**License:** Apache-2.0 · **Author:** Shahm · **Downloads:** 151 · **Likes:** 11 · **Tags:** Text Generation, Transformers, German

A German abstractive summarization model based on facebook/bart-base, fine-tuned on the MLSUM de dataset.

## Distilbart Xsum 6 6

**License:** Apache-2.0 · **Author:** sshleifer · **Downloads:** 147 · **Likes:** 0 · **Tags:** Text Generation, English

DistilBART is a distilled version of the BART model, focusing on text summarization tasks; it significantly reduces model size and inference time while maintaining high performance.

## Bart Base Cnn R2 18.7 D23 Hybrid

**License:** Apache-2.0 · **Author:** echarlaix · **Downloads:** 18 · **Likes:** 0 · **Tags:** Text Generation, Transformers, English

A pruned and optimized BART-base model, fine-tuned on the CNN/DailyMail dataset for summarization tasks.